Introduction to Open Data Science - Course Project

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Sun Dec 05 19:09:28 2021"

I am interested in this course because of my research, and I expect it to be a strong introduction to data science. The course should give me enough knowledge to be able to apply the data science process in my research.

I learned about the course via an email sent to me by my department.

My GitHub repository can be found here.

Here is my course diary web page.


For this analysis, a data frame named learningAnalysis2014 is created by reading the CSV file “learning2014.csv”. The data frame consists of 7 variables (gender, Age, attitude, deep, stra, surf, Points) and 166 observations. The data come from a survey of statistics students and include the students' global attitude toward statistics and their exam points. “deep”, “stra” and “surf” are combined variables, each formed by taking the mean of the related survey questions. “attitude” was scaled to the Likert scale (1-5) by dividing the original “Attitude” column by 10.

More information about the data can be found here (https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS3-meta.txt)
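The wrangling itself was done in a separate script. Below is only a rough sketch (not the actual script) of how the combined variables could be created, assuming the raw survey data have already been read into a data frame named lrn14 that contains an Attitude column and the individual question columns; the question names used here are examples only, the full groupings are listed in the meta file linked above.

# rough sketch of the wrangling; 'lrn14' and the question names are assumptions
deep_questions <- c("D03", "D11", "D19", "D27")      # example subset only
lrn14$deep     <- rowMeans(lrn14[, deep_questions])  # combine by taking the mean
lrn14$attitude <- lrn14$Attitude / 10                # scale to the Likert 1-5 range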

learningAnalysis2014 <- read.csv(file = 'data/learning2014.csv')
dim(learningAnalysis2014)
## [1] 166   7
str(learningAnalysis2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...

These are plots of all the relationships among the variables. From the visualization, we see some positive and negative correlations. An interesting correlation is the one between attitude and Points. As expected, there is a negative correlation between surf and deep, which is also the strongest negative correlation in the data. We also see that there are more female than male students. However, from the plots, there is no clear relationship between gender and Points, and no strong correlations appear there.

The summary table below gives more information about the distributions of the variables (minimum, quartiles, mean, and maximum).
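As a quick numeric check of these visual impressions (my own addition, not part of the original output), the pairwise correlations of the numeric study variables can be printed directly:

# correlation matrix of the numeric study variables, rounded for readability
round(cor(learningAnalysis2014[, c("attitude", "deep", "stra", "surf", "Points")]), 2)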

pairs(learningAnalysis2014[-1], col = "red")

library(ggplot2)

library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
ggpairs(learningAnalysis2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

summary(learningAnalysis2014)
##     gender               Age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           Points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

I am using the following 3 variables to explain Points: ‘attitude’, ‘stra’, and ‘deep’.

ggpairs(learningAnalysis2014, lower = list(combo = wrap("facethist", bins = 20)))

my_model <- lm(Points ~ attitude + stra + deep, data = learningAnalysis2014)

summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude + stra + deep, data = learningAnalysis2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.5239  -3.4276   0.5474   3.8220  11.5112 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.3915     3.4077   3.343  0.00103 ** 
## attitude      3.5254     0.5683   6.203 4.44e-09 ***
## stra          0.9621     0.5367   1.793  0.07489 .  
## deep         -0.7492     0.7507  -0.998  0.31974    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 162 degrees of freedom
## Multiple R-squared:  0.2097, Adjusted R-squared:  0.195 
## F-statistic: 14.33 on 3 and 162 DF,  p-value: 2.521e-08

From the above summary table, we see that the median of the residuals is 0.5474, so the residuals are roughly centered around zero. The multiple R-squared is only about 0.21, which suggests that attitude, stra, and deep explain a fairly small share of the variation in Points, so predicting Points from them is difficult. However, human behavior is very difficult to predict, and this level of fit could be acceptable in this case.

From the coefficients table, we see that the p-value for deep is high, which suggests that deep does not contribute much to explaining Points. attitude and stra are better predictors of Points.

Below is a new regression where I have removed deep.

my_model2 <- lm(Points ~ attitude + stra, data = learningAnalysis2014)

summary(my_model2)
## 
## Call:
## lm(formula = Points ~ attitude + stra, data = learningAnalysis2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Based on the above summary table, removing deep did not improve the fit: the multiple R-squared decreased slightly (from 0.2097 to 0.2048), while the adjusted R-squared stayed essentially the same.
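One way to check this formally (my own addition) is an F-test comparing the two nested models:

# F-test for dropping 'deep': compares my_model2 (reduced) against my_model (full)
anova(my_model2, my_model)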

Below are the diagnostic plots of the model: Residuals vs Fitted values, Normal Q-Q plot, and Residuals vs Leverage. The Q-Q plot shows a reasonably straight line, which supports the normality assumption for the errors. The Residuals vs Fitted values plot looks reasonable since the residuals appear random, with no clear pattern. The Residuals vs Leverage plot shows no observations with both a large residual and high leverage, so no single observation has an unusually strong influence on the model.

my_model2 <- lm(Points ~ attitude + stra, data = learningAnalysis2014)

par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))


Logistic regression

Read the file alc.csv into a new data frame named studentAlc and print the variable names.

studentAlc <- read.table("~/IODS-project/IODS-project/data/alc.csv", sep = ",", header = TRUE)

dim(studentAlc)
## [1] 370  35
colnames(studentAlc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "guardian"   "traveltime" "studytime"  "schoolsup" 
## [16] "famsup"     "activities" "nursery"    "higher"     "internet"  
## [21] "romantic"   "famrel"     "freetime"   "goout"      "Dalc"      
## [26] "Walc"       "health"     "failures"   "paid"       "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

Data Set Information:

“This data approach student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school related features) and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued at the 3rd period), while G1 and G2 correspond to the 1st and 2nd period grades. It is more difficult to predict G3 without G2 and G1, but such prediction is much more useful (see paper source for more details).”

The above information is from UCI Machine Learning Repository.

More information about the data sets can be found here:(https://archive.ics.uci.edu/ml/datasets/Student+Performance)

I chose the following 4 variables to predict high use of alcohol: G3, absences, goout, and health. I chose these variables thinking that they would be easily available to a school without the need to survey students.

G3:

My assumption here is that a low grade is an indication of high alcohol use, since heavy alcohol use can affect cognitive functions such as memory.

absences:

A high number of absences could be an indication of high alcohol use, as heavy drinking would affect one's schedule. Absences could be due to being sick after consuming a lot of alcohol.

goout:

Going out with friends a lot might raise alcohol consumption, since there will be more opportunities to drink.

health:

Poor health could be an indication of high alcohol consumption. Alcohol can negatively affect physical and mental health.

These are bar plots of all the variables.

# access the tidyverse libraries tidyr, dplyr, ggplot2
library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
# draw a bar plot of each variable
gather(studentAlc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

Box plots of my chosen variables vs high-use of alcohol

# initialize a plot of high_use and G3
g1 <- ggplot(studentAlc, aes(x = high_use, y = G3, col = sex))

# define the plot as a boxplot and draw it
g1 + geom_boxplot() + ylab("grade") + ggtitle("Student final grade by alcohol consumption and sex")

# initialise a plot of high_use and absences
g2 <- ggplot(studentAlc, aes(x = high_use, y = absences, col = sex))

# define the plot as a boxplot and draw it
g2 + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")

# initialize a plot of high_use and 
g3 <- ggplot(studentAlc, aes(x = high_use, y = goout, col = sex))

# define the plot as a boxplot and draw it
g3 + geom_boxplot() + ylab("going out") + ggtitle("Student going out with friends by alcohol consumption and sex")

g4 <- ggplot(studentAlc, aes(x = high_use, y = health, col = sex))

# define the plot as a boxplot and draw it
g4 + geom_boxplot() + ylab("health") + ggtitle("Student health by alcohol consumption and sex")

Numerical exploration of my chosen variables

# produce summary statistics by grade
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_grade = mean(G3))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_grade
##   <chr> <lgl>    <int>      <dbl>
## 1 F     FALSE      154       11.4
## 2 F     TRUE        41       11.8
## 3 M     FALSE      105       12.3
## 4 M     TRUE        70       10.3
# produce summary statistics by absences
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_absences = mean(absences))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_absences
##   <chr> <lgl>    <int>         <dbl>
## 1 F     FALSE      154          4.25
## 2 F     TRUE        41          6.85
## 3 M     FALSE      105          2.91
## 4 M     TRUE        70          6.1
# produce summary statistics by going out
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_going_out = mean(goout))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_going_out
##   <chr> <lgl>    <int>          <dbl>
## 1 F     FALSE      154           2.95
## 2 F     TRUE        41           3.39
## 3 M     FALSE      105           2.70
## 4 M     TRUE        70           3.93
# produce summary statistics by health
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_health = mean(health))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_health
##   <chr> <lgl>    <int>       <dbl>
## 1 F     FALSE      154        3.37
## 2 F     TRUE        41        3.39
## 3 M     FALSE      105        3.67
## 4 M     TRUE        70        3.93

Interpretation of the above plots and tables

We see that grades are lower for males when high_use of alcohol is true, while females seem to be less negatively affected. Final grade is not as strong a predictor as I thought. Absences and high_use of alcohol seem to be clearly related; females seem to have slightly more absences than males when high_use is true. Going out shows some relationship with high_use for both males and females: the more a student goes out, the more he or she consumes alcohol. Health does not seem to have a strong relationship with high_use of alcohol, although I would have expected a stronger one. It is possible that the participants did not answer this question truthfully, or that they are not aware of their overall (mental and physical) health. This seems to be especially true for males. Females have a broader range of answers, with 25% of them rating their health below 2 (first quartile when high_use = TRUE).
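As a numeric check of that last point (my own addition, not part of the original output), the health quartiles behind the box plots can be computed per group:

# first quartile, median and third quartile of health by sex and alcohol use
studentAlc %>% group_by(sex, high_use) %>%
  summarise(q1 = quantile(health, 0.25),
            median = median(health),
            q3 = quantile(health, 0.75))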

Logistic regression to statistically explore the relationship between my chosen variables

For this model, high_use is the target variable and final grade (G3), absences, going out (goout), and health are the predictors. I did not include sex because, if a school needs to predict high_use of alcohol, sex might add an unnecessary bias. For instance, male students might be watched more carefully than female students because males seem to have a higher rate of high alcohol use.

# find the model with glm()
m <- glm(high_use ~ G3 + absences + goout + health, data = studentAlc, family = "binomial")

# print out a summary of the model
summary(m)
## 
## Call:
## glm(formula = high_use ~ G3 + absences + goout + health, family = "binomial", 
##     data = studentAlc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8342  -0.7505  -0.5508   0.9357   2.3172  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.70363    0.79076  -4.684 2.82e-06 ***
## G3          -0.03852    0.03935  -0.979 0.327587    
## absences     0.07436    0.02212   3.362 0.000773 ***
## goout        0.72459    0.11941   6.068 1.29e-09 ***
## health       0.15179    0.09209   1.648 0.099275 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 386.07  on 365  degrees of freedom
## AIC: 396.07
## 
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m)
## (Intercept)          G3    absences       goout      health 
## -3.70363174 -0.03852050  0.07435996  0.72459398  0.15179099
# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0246339 0.004960425 0.1109072
## G3          0.9622120 0.890726272 1.0397748
## absences    1.0771945 1.033123265 1.1280521
## goout       2.0638929 1.642693029 2.6260422
## health      1.1639169 0.973629481 1.3981834

Interpretation of the above model

From the model p-values, we see that absences and going out (goout) are likely relevant variables for explaining high_use of alcohol. For final grade and health, however, the data provide little evidence that these variables are needed to explain high_use.

From the odds ratio table, we see that absences, going out, and health have an odds ratio greater than 1, which implies that these variables are positively associated with high_use of alcohol. Going out has an odds ratio of about 2, showing a strong positive association with high_use. Final grade has an odds ratio slightly below 1 (0.96), which suggests a weak negative association, if any.
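To make the odds ratio interpretation concrete, here is a small hypothetical example (the baseline odds of 0.30 is an assumed number, not taken from the data): one extra point of goout multiplies the odds of high_use by the fitted odds ratio of roughly 2.

# hypothetical example: effect of one extra point of 'goout' on the odds
odds_before <- 0.30                       # assumed baseline odds, for illustration only
odds_after  <- odds_before * OR["goout"]  # multiply by the fitted odds ratio
odds_after / (1 + odds_after)             # convert the new odds back to a probability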

It seems that most of my chosen variables have some association with high_use. Therefore, they could be useful for predicting high_use.

Predictive power of my model

# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# see the last ten original classes, predicted probabilities, and class predictions
select(studentAlc, G3, absences, goout, health, high_use, probability, prediction) %>% tail(10)
##     G3 absences goout health high_use probability prediction
## 361  2        7     3      3     TRUE  0.34728425      FALSE
## 362 11        3     3      3     TRUE  0.21838164      FALSE
## 363 10        2     1      5     TRUE  0.07895958      FALSE
## 364 16        4     4      2     TRUE  0.30564438      FALSE
## 365 12        3     2      3    FALSE  0.11524638      FALSE
## 366  8        4     3      3     TRUE  0.25252304      FALSE
## 367 14        0     2      5    FALSE  0.11559977      FALSE
## 368  9        4     4      5     TRUE  0.47613178      FALSE
## 369 10        8     4      2     TRUE  0.42751451      FALSE
## 370  0        0     2      5    FALSE  0.18309931      FALSE
# tabulate the target variable versus the predictions
table(high_use = studentAlc$high_use, prediction = studentAlc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   233   26
##    TRUE     65   46
# initialize a plot of 'high_use' versus 'probability' in 'studentAlc'
g <- ggplot(studentAlc, aes(x = probability, y = high_use, col = prediction))

# define the geom as points and draw the plot
g + geom_point()

# tabulate the target variable versus the predictions
table(high_use = studentAlc$high_use, prediction = studentAlc$prediction) %>% prop.table %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.62972973 0.07027027 0.70000000
##    TRUE  0.17567568 0.12432432 0.30000000
##    Sum   0.80540541 0.19459459 1.00000000
# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2459459

The goal is to make the loss function as small as possible. The loss function here is about 0.25, so the model predicts incorrectly about 25% of the time. The model performs better than a simple guessing strategy.
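For comparison (my own addition), the same loss function can be used to score a simple guessing strategy that always predicts FALSE (probability 0); its loss equals the share of high_use students in the data, about 0.30.

# loss of always guessing FALSE: wrong exactly for the high_use = TRUE students
loss_func(class = studentAlc$high_use, prob = 0)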

10-fold cross-validation

Bonus

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459

The result of the cross-validation (K = 10) is similar to the previous loss function result on the training data. My model seems to produce a similar error to the model used in DataCamp.

Super Bonus

Many variables

# New model with many variables
m <- glm(high_use ~ G3 + absences + goout + health + studytime + failures + freetime + famrel, data = studentAlc, family = "binomial")

# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR      2.5 %    97.5 %
## (Intercept) 0.1112482 0.01462431 0.7969864
## G3          1.0004152 0.91884143 1.0909553
## absences    1.0707813 1.02620258 1.1197948
## goout       2.0410506 1.59736372 2.6461037
## health      1.1744770 0.97382116 1.4240680
## studytime   0.6138411 0.43227534 0.8559677
## failures    1.3512750 0.85082981 2.1704783
## freetime    1.1869927 0.89658690 1.5765989
## famrel      0.6655703 0.49995249 0.8817370
# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2324324
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2648649

New model with 2 variables only

# New model with 2 variables
m <- glm(high_use ~ absences + goout, data = studentAlc, family = "binomial")

# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR      2.5 %     97.5 %
## (Intercept) 0.02644199 0.01082308 0.06045431
## absences    1.07930582 1.03477610 1.13034717
## goout       2.08298255 1.66258410 2.64170298
# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2378378
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459

It seems that the prediction error does not change greatly when going from many variables to only 2; the simpler model performs about as well in cross-validation.


Chapter 4: Clustering and Classification

Part 2: explore the structure and the dimensions of the data

Loading the Boston data.

# set plots size
knitr::opts_chunk$set(fig.width=16, fig.height=10) 


#code from DataCamp.

#access the MASS package
library (dplyr)
library(MASS)
library(corrplot)
library(tidyr)


#load the data
data("Boston")

#explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Description:

The Boston data set represents housing values in the suburbs of Boston.

The Boston data frame has 506 rows and 14 columns.

This data frame contains the following columns:

crim = per capita crime rate by town
zn = proportion of residential land zoned for lots over 25,000 sq.ft
indus = proportion of non-retail business acres per town
chas = Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
nox = nitrogen oxides concentration (parts per 10 million)
rm = average number of rooms per dwelling
age = proportion of owner-occupied units built prior to 1940
dis = weighted mean of distances to five Boston employment centres
rad = index of accessibility to radial highways
tax = full-value property-tax rate per $10,000
ptratio = pupil-teacher ratio by town
black = 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
lstat = lower status of the population (percent)
medv = median value of owner-occupied homes in $1000s

Source:

Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, 81–102.

Belsley D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics. Identifying Influential Data and Sources of Collinearity. New York: Wiley.

Part 3: graphical overview of the data

#plot matrix of the variables
pairs(Boston, gap=1/30)

# MASS, corrplot, tidyr and Boston dataset are available

# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits = 2)

# print the correlation matrix
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax ptratio
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58    0.29
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31   -0.39
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72    0.38
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04   -0.12
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67    0.19
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29   -0.36
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51    0.26
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53   -0.23
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91    0.46
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00    0.46
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46    1.00
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44   -0.18
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54    0.37
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47   -0.51
##         black lstat  medv
## crim    -0.39  0.46 -0.39
## zn       0.18 -0.41  0.36
## indus   -0.36  0.60 -0.48
## chas     0.05 -0.05  0.18
## nox     -0.38  0.59 -0.43
## rm       0.13 -0.61  0.70
## age     -0.27  0.60 -0.38
## dis      0.29 -0.50  0.25
## rad     -0.44  0.49 -0.38
## tax     -0.44  0.54 -0.47
## ptratio -0.18  0.37 -0.51
## black    1.00 -0.37  0.33
## lstat   -0.37  1.00 -0.74
## medv     0.33 -0.74  1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex = 0.6)

corrplot(cor_matrix, method = 'number', type="upper")

Looking at the data, we see that medv has a positive correlation with rm and a negative correlation with lstat. This makes sense, as a house with more rooms would have a higher price, and houses located in lower-status areas would have a lower price. We also see that nox has a positive correlation with indus and age, and a negative correlation with dis: the more industry there is, the higher the nitrogen oxide emissions, and older houses are probably concentrated in the old industrial areas of the city. medv has a negative correlation with crim, but at -0.39 it is weaker than the correlation with rm (0.70), so at first glance the value of houses is mainly driven by the number of rooms per dwelling. tax has a moderate negative correlation with medv (-0.47).
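A quick way to rank these relationships numerically (my own addition) is to sort the correlations of every variable with medv:

# correlations of each variable with medv, from most negative to most positive
sort(cor_matrix[, "medv"])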

Part 4: standardize the dataset

When scaling the data, the column mean is subtracted from each value and the difference is divided by the column standard deviation. This is an example of the data before and after scaling:
Before: row 1, rm = 6.575
After: row 1, rm = 0.413262920
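This can be checked by hand (my own addition) by applying the same formula to the first value of rm:

# manual standardization of the first rm value: (value - column mean) / column sd
(Boston$rm[1] - mean(Boston$rm)) / sd(Boston$rm)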

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

Use the quantiles as break points to create a categorical crime variable, and divide the dataset into train and test sets.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]
# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

Now 80% of the data belongs to the train set. The correct crime categories were saved from the test set, and the categorical crime variable was removed from the test data.
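A quick check of the split sizes (my own addition):

# number of rows in the train and test sets (roughly 80% and 20% of 506)
nrow(train)
nrow(test)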

Part 5: linear discriminant analysis

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2599010 0.2475248 0.2376238 0.2549505 
## 
## Group means:
##                  zn      indus        chas        nox         rm        age
## low       0.8825822 -0.8689290 -0.12234430 -0.8632569  0.3702401 -0.8631880
## med_low  -0.1159242 -0.2590069 -0.03610305 -0.5456276 -0.1567915 -0.3065453
## med_high -0.3693281  0.1448833  0.21980846  0.3237317  0.1400878  0.3927074
## high     -0.4872402  1.0170891 -0.15765625  1.0529524 -0.4491714  0.7950054
##                 dis        rad        tax     ptratio       black       lstat
## low       0.8524106 -0.6963951 -0.7322339 -0.44576521  0.37868053 -0.72405087
## med_low   0.3025489 -0.5454537 -0.4423189  0.04271073  0.31656511 -0.13756544
## med_high -0.3444740 -0.4231898 -0.3194627 -0.24409402  0.06926651 -0.01144215
## high     -0.8389947  1.6384176  1.5142626  0.78111358 -0.85551078  0.92257977
##                 medv
## low       0.44885281
## med_low  -0.03346774
## med_high  0.17968812
## high     -0.75781458
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.09927556  0.65553764 -0.87071206
## indus    0.02369128 -0.19029744  0.22572273
## chas    -0.12984403 -0.09395308  0.09481701
## nox      0.44316495 -0.87292036 -1.32893963
## rm      -0.11265613 -0.09497230 -0.19176527
## age      0.15853240 -0.34696842 -0.04266684
## dis     -0.04520253 -0.30539317  0.08192110
## rad      3.36049507  0.91950761 -0.22961608
## tax      0.03862638  0.02341945  0.61520967
## ptratio  0.09801333 -0.08331767 -0.08901837
## black   -0.11728001  0.04471435  0.13288309
## lstat    0.21445469 -0.25586379  0.29511619
## medv     0.18554061 -0.54070962 -0.22428537
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9587 0.0313 0.0101
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

Part 6: Predict the crime classes with the LDA model on the test data

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17       4        1    0
##   med_low    6      17        3    0
##   med_high   0       9       19    2
##   high       0       0        0   24

The best predictions are for the high category; all of its observations are predicted correctly. The model does not predict med_high (or med_low) as well. We can see in the LDA plot that med_high and med_low lie on the same side and overlap, while high is clearly separated from low, med_low, and med_high, which is why it is predicted so well.
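The overall share of correct predictions can also be computed directly from the same cross-tabulation (my own addition):

# proportion of test observations on the diagonal of the confusion table
conf <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf)) / sum(conf)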

Part 7: Reload the Boston dataset and standardize the dataset

# load MASS and Boston
library(MASS)
data('Boston')

# scale data
Boston = as.data.frame(scale(Boston))

# euclidean distance matrix
dist_eu <- dist(Boston)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(Boston, method = 'manhattan')

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

cluster with centers = 3

# k-means clustering
km <-kmeans(Boston, centers = 3)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

cluster with centers = 1

# k-means clustering
km <-kmeans(Boston, centers = 1)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

cluster with centers = 2

# k-means clustering
km <-kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

The clustering with centers = 3 seems to be the best of the three.

Bonus and super bonus

library(ggplot2)
# Boston dataset is available
set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering
km <-kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)


#install.packages("plotly")
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')

Chapter 5: Dimensionality reduction techniques

Part 1: Show a graphical overview of the data and show summaries of the variables in the data

# uploading dataset from: http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt
human <- read.csv("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt", sep = ",",  header = TRUE)

# code is from DataCamp.
# library
library(GGally)
library(corrplot)

# visualize the 'human_' variables
ggpairs(human)

# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot

# summary of the data
summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50